Empirical study on the efficiency of Spiking Neural Networks with axonal delays, and algorithm-hardware benchmarking
The role of axonal synaptic delays in the efficacy and performance of
artificial neural networks has been largely unexplored. In step-based
analog-valued neural network models (ANNs), the concept is almost absent. In
their spiking neuroscience-inspired counterparts, there is hardly a systematic
account of their effects on model performance in terms of accuracy and number
of synaptic operations. This paper proposes a methodology for accounting for
axonal delays in the training loop of deep Spiking Neural Networks (SNNs),
with the aim of efficiently solving machine learning tasks on data with rich
temporal dependencies. We then conduct an empirical study of the effects of
axonal delays on model performance during inference for the Adding task, a
benchmark for sequential regression, and for the Spiking Heidelberg Digits
dataset (SHD), commonly used for evaluating event-driven models. Quantitative
results on the SHD show that SNNs incorporating axonal delays instead of
explicit recurrent synapses achieve state-of-the-art performance, exceeding
90% test accuracy while requiring fewer than half as many trainable synapses.
Additionally, we estimate the
required memory in terms of total parameters and energy consumption of
accommodating such delay-trained models on a modern neuromorphic accelerator.
These estimations are based on the number of synaptic operations and the
reference GF-22nm FDX CMOS technology. As a result, we demonstrate that a
reduced parameterization, which incorporates axonal delays, leads to
approximately 90% energy and memory reduction in digital hardware
implementations at similar performance on the aforementioned tasks.
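The delay mechanism described above can be sketched as a per-synapse circular buffer: a spike emitted at time t is only integrated by the target neuron at time t + delay, so temporal context is carried by the delays rather than by recurrent synapses. The NumPy sketch below illustrates the idea; the LIF neuron model, layer sizes, and random delay values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class DelayedSpikingLayer:
    """Feed-forward LIF layer where each synapse has an integer axonal delay:
    a spike emitted at time t is integrated at time t + delay.
    Illustrative sketch only; sizes and delays are arbitrary assumptions."""

    def __init__(self, n_in, n_out, max_delay=25, tau=0.9, threshold=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.3, size=(n_out, n_in))            # synaptic weights
        self.delay = rng.integers(0, max_delay, size=(n_out, n_in))  # per-synapse delays
        self.max_delay = max_delay
        self.tau, self.threshold = tau, threshold
        self.buf = np.zeros((max_delay, n_in))  # circular buffer of past input spikes
        self.v = np.zeros(n_out)                # membrane potentials
        self.t = 0

    def step(self, spikes_in):
        self.buf[self.t % self.max_delay] = spikes_in
        # each synapse reads the input spike emitted `delay` steps ago
        arrival = self.buf[(self.t - self.delay) % self.max_delay,
                           np.arange(self.w.shape[1])]
        self.v = self.tau * self.v + (self.w * arrival).sum(axis=1)
        out = (self.v >= self.threshold).astype(float)
        self.v[out > 0] = 0.0  # reset membrane after a spike
        self.t += 1
        return out

rng = np.random.default_rng(1)
layer = DelayedSpikingLayer(n_in=4, n_out=3)
for _ in range(20):
    out = layer.step((rng.random(4) < 0.5).astype(float))
```

Because the buffer only stores binary input spikes, the extra memory cost is one bit per input per delay step, which is the kind of trade-off the paper's memory and energy estimates quantify.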
Open the box of digital neuromorphic processor: Towards effective algorithm-hardware co-design
Sparse and event-driven spiking neural network (SNN) algorithms are the ideal
candidate solution for energy-efficient edge computing. Yet, with the growing
complexity of SNN algorithms, it is difficult to properly benchmark and optimize
their computational cost without hardware in the loop. Although digital
neuromorphic processors have been widely adopted to benchmark SNN algorithms,
their black-box nature is problematic for algorithm-hardware co-optimization.
In this work, we open the black box of the digital neuromorphic processor for
algorithm designers by presenting the neuron processing instruction set and
detailed energy consumption of the SENeCA neuromorphic architecture. For
convenient benchmarking and optimization, we provide the energy cost of the
essential neuromorphic components in SENeCA, including neuron models and
learning rules. Moreover, we exploit SENeCA's hierarchical memory, which
offers an advantage over existing neuromorphic processors. We show the energy
efficiency of SNN algorithms for video processing and online learning, and
demonstrate the potential of our work for optimizing algorithm designs.
Overall, we present a practical approach to enable algorithm designers to
accurately benchmark SNN algorithms and pave the way towards effective
algorithm-hardware co-design.
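The white-box energy accounting this kind of work enables reduces to a simple lookup: multiply per-primitive event counts (synaptic operations, neuron updates, memory accesses) by per-operation energy costs taken from the processor's published tables. The sketch below uses placeholder picojoule values, not SENeCA's actual numbers.

```python
# Hypothetical per-operation energy costs in picojoules (pJ); the real
# values would come from the processor's published energy tables.
ENERGY_PJ = {
    "syn_op": 3.0,         # one synaptic accumulate
    "neuron_update": 5.0,  # one neuron state update
    "mem_access": 1.5,     # one hierarchical-memory read/write
}

def estimate_energy_uj(counts, table=ENERGY_PJ):
    """Estimate total energy in microjoules from per-primitive event counts."""
    total_pj = sum(table[op] * n for op, n in counts.items())
    return total_pj / 1e6  # pJ -> uJ

workload = {"syn_op": 2_000_000, "neuron_update": 100_000, "mem_access": 500_000}
print(estimate_energy_uj(workload))  # 7.25 uJ with these placeholder numbers
```

Because the model is linear in the event counts, an algorithm designer can compare two SNN variants by counting their synaptic operations alone, without hardware in the loop.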
NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking
The field of neuromorphic computing holds great promise in terms of advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and industry, to define benchmarks for neuromorphic computing: NeuroBench. The goal of NeuroBench is to be a collaborative, fair, and representative benchmark suite developed by the community, for the community. In this paper, we discuss the challenges associated with benchmarking neuromorphic solutions and outline the key features of NeuroBench. We believe that NeuroBench will be a significant step towards defining standards that can unify the goals of neuromorphic computing and drive its technological progress. Please visit neurobench.ai for the latest updates on the benchmark tasks and metrics.
Adaptation and awareness for autonomic systems
EThOS - Electronic Theses Online Service, United Kingdom
Design, analysis and performances of chemical-inspired rate controllers in packet networks
In computer networks, a Distributed Rate Controller (DRC) must quickly propagate changes of inflow rates and let participating sites converge to their admissible rates. In this paper we introduce a family of DRCs whose controllers can be easily customized while their performance and dynamics remain strictly predictable. Borrowing engineering methods from chemistry, we show how to derive a deterministic mathematical model of the network flow that can be analyzed with standard tools of (linear) system theory. We also report on simulation and native experimental results that validate our theoretical approach.
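As a minimal illustration of the chemistry-inspired modelling, a single flow whose queue drains like a first-order chemical reaction obeys the linear ODE dx/dt = inflow - k*x, which converges to the fixed point inflow/k regardless of the initial state; this is exactly the kind of predictable dynamics that linear system theory can certify. The constants below are illustrative, not taken from the paper.

```python
# Toy law-of-mass-action rate controller: packets arrive at a constant
# rate `inflow` and drain at a rate proportional to the queue fill x
# (a first-order "reaction" x -> out with rate coefficient k).
# The linear ODE  dx/dt = inflow - k*x  has the fixed point x* = inflow / k.
# All constants are illustrative, not taken from the paper.

def simulate(inflow=10.0, k=2.0, dt=0.01, steps=5000):
    x = 0.0
    for _ in range(steps):
        x += dt * (inflow - k * x)  # forward-Euler step of the ODE
    return x

print(simulate())  # approaches the fixed point inflow/k = 5.0
```

Because the model is linear, the convergence speed is governed by k alone, so a designer can tune the controller's transient behaviour analytically before running any packet-level experiment.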
A Common Architecture for Cross Layer and Network Context Awareness
The emerging Internet and non-Internet environments have renewed interest in flexible and adaptive communication subsystems, residing in end and intermediate systems, that utilise cross-layer and wider network-context information. To date, most cross-layer solutions have been very application- and/or network-specific and lack re-usability. Here we propose a common architecture to support the autonomic composition of functions using generic views of information derived from lower-level primitives. At its heart is a distributed Information Sensing and Sharing framework. Key features of this framework are the decoupling of information collection from information use, its capability to multiplex information sources, its operational independence from any specific protocol configuration, and its usability outside a node context.
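The decoupling of information collection from information use can be sketched as a toy publish/subscribe bus: sensors publish measurements to named topics without knowing who consumes them, and cross-layer consumers subscribe independently. The topic names and API below are illustrative assumptions, not the framework's actual interface.

```python
from collections import defaultdict

class InfoSharingBus:
    """Toy publish/subscribe bus illustrating the decoupling of information
    collection (sensors publish) from information use (clients subscribe).
    Names and API are assumptions for illustration only."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, value):
        # sensors never know who consumes their measurements
        for cb in self.subscribers[topic]:
            cb(value)

bus = InfoSharingBus()
seen = []
bus.subscribe("link/loss_rate", seen.append)  # a cross-layer consumer
bus.publish("link/loss_rate", 0.02)           # a lower-layer sensor
print(seen)  # [0.02]
```

Multiplexing falls out for free: several sensors can publish to the same topic, and the consumer's callback never changes, which is what makes the framework independent of any specific protocol configuration.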
Facilitating functional adaptation in autonomic networks
The extensive growth and expansion of the Internet and its applications have put current networking technologies to the test. New application needs have repeatedly challenged the suitability of the Internet architecture’s “hourglass” model as a one-size-fits-all design. Research in Autonomic Networks [1] aims to provide an alternative solution to these problems by promoting localization and customization through the use of self-* properties at either the service or the network level. The Autonomic Network Architecture (ANA) project [2] takes a clean-slate design approach to the problem by proposing an architecture that would enable the network to evolve and adapt its functionality to application needs within a given operational environment. Previous work [3, 9] has demonstrated the design of a lightweight service virtualization mechanism aimed at localizing network servicing in different service contexts. The work introduced in this report enhances the functional composition framework with a service composition model. This model is essential in enabling the implementation of autonomic features and the realization of key autonomic networking characteristics.